Conditional Adversarial Camera Model Anonymization
The model of camera that was used to capture a particular photographic image
(model attribution) is typically inferred from high-frequency model-specific
artifacts present within the image. Model anonymization is the process of
transforming these artifacts such that the apparent capture model is changed.
We propose a conditional adversarial approach for learning such
transformations. In contrast to previous works, we cast model anonymization as
the process of transforming both high and low spatial frequency information. We
augment the objective with the loss from a pre-trained dual-stream model
attribution classifier, which constrains the generative network to transform
the full range of artifacts. Quantitative comparisons demonstrate the efficacy
of our framework in a restrictive non-interactive black-box setting.
Comment: ECCV 2020 - Advances in Image Manipulation workshop (AIM 2020)
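The augmented objective described above — an adversarial term plus a loss from a pre-trained attribution classifier conditioned on a target camera model — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's formulation: the function names, the weighting parameter `lam`, and the choice of negative-log-likelihood losses are all assumptions.

```python
import numpy as np

def cross_entropy(probs, target_idx):
    # Negative log-likelihood of the target class (epsilon for stability).
    return -np.log(probs[target_idx] + 1e-12)

def anonymization_loss(disc_real_prob, attr_probs, target_model_idx, lam=1.0):
    """Hedged sketch of a generator objective for model anonymization.

    disc_real_prob:   discriminator's probability that the transformed
                      image is real (standard adversarial term).
    attr_probs:       softmax output of a pre-trained model-attribution
                      classifier on the transformed image.
    target_model_idx: index of the desired (apparent) capture model.
    lam:              assumed weighting between the two terms.
    """
    adv_loss = -np.log(disc_real_prob + 1e-12)               # fool the discriminator
    attr_loss = cross_entropy(attr_probs, target_model_idx)  # push toward target model
    return adv_loss + lam * attr_loss
```

In this sketch, lowering either term requires the generator to alter the artifacts the attribution classifier relies on, which is the constraint the abstract attributes to the dual-stream classifier loss.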
Faceless Person Recognition: Privacy Implications in Social Media
As we shift more of our lives into the virtual domain, the volume of data
shared on the web keeps increasing and presents a threat to our privacy. This
work contributes to the understanding of the privacy implications of such data
sharing by analysing how well people are recognisable in social media data. To
facilitate a systematic study, we define a number of scenarios considering
factors such as how many heads of a person are tagged and whether those heads
are obfuscated. We propose a robust person recognition system that can
handle large variations in pose and clothing, and can be trained with few
training samples. Our results indicate that a handful of images is enough to
threaten users' privacy, even in the presence of obfuscation. We show detailed
experimental results and discuss their implications.
Comment: Accepted to ECCV'1